Message Passing and Threads

Authors

  • Ian Foster
  • William Gropp
  • Carl Kesselman
Abstract

In this chapter we examine two fundamental, although low-level, approaches to expressing parallelism in programs. Over the years, numerous different approaches to designing and implementing parallel programs have been developed (e.g., see the excellent survey article by Skillicorn and Talia [870]). However, over time, two dominant alternatives have emerged: message passing and multithreading. These two approaches can be distinguished in terms of how concurrently executing segments of an application share data and synchronize their execution. In message passing, data is shared by explicitly copying (“sending”) it from one parallel component to another, while synchronization is implicit with the completion of the copy. In contrast, the multithreading approach shares data implicitly through the use of shared memory, with synchronization being performed explicitly via mechanisms such as locks, semaphores and condition variables. As with any set of alternatives, there are advantages and disadvantages to each approach. Multithreaded programs can be executed particularly efficiently on computers that use physically shared memory as their communication architecture. However, many parallel computers being built today do not support shared memory across the whole computer, in which case the message-passing approach is more appropriate. From the perspective of programming complexity, the implicit sharing provided by the shared-memory model simplifies the process of converting existing sequential code to run on a parallel computer. However, the need for explicit synchronization can result in errors that produce nondeterministic race conditions that are hard to detect and correct. On the other hand, converting a program to use message passing requires more work up front, as one must ex-
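To make the contrast concrete, the sketch below (not code from the chapter; it assumes a POSIX threads library and an MPI implementation such as MPICH or Open MPI, and the names counter, work, and the USE_THREADS compile flag are illustrative choices) expresses each model in C. In the threaded version the data is shared implicitly because both threads live in one address space, and a mutex supplies the explicit synchronization; in the MPI version the data moves by an explicit copy, and the completed receive supplies the implicit synchronization.

/*
 * A minimal sketch, assuming POSIX threads and an MPI implementation;
 * not code from the chapter.
 *
 *   Threads version:  cc -DUSE_THREADS demo.c -lpthread && ./a.out
 *   MPI version:      mpicc demo.c -o demo && mpiexec -n 2 ./demo
 */
#ifdef USE_THREADS

#include <pthread.h>
#include <stdio.h>

/* Multithreading: the counter is shared implicitly because both
 * threads run in the same address space. */
static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* explicit synchronization: without
                                        the lock this update is a race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 on every run */
    return 0;
}

#else  /* message passing */

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Data is shared by explicitly copying ("sending") it to
         * another process; no memory is shared. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Synchronization is implicit: the receive returns only
         * after the copy from rank 0 has completed. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received value = %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

#endif

The threaded form depends on a single shared address space, so on machines without system-wide shared memory only the message-passing form applies directly, which is the trade-off the abstract describes.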


Similar Resources

Generalized Communicators in the Message Passing Interface

We propose extensions to the Message Passing Interface (MPI) that generalize the MPI communicator concept to allow multiple communication endpoints per process, dynamic creation of endpoints, and the transfer of endpoints between processes. The generalized communicator construct can be used to express a wide range of interesting communication structures, including collective communication opera...


Designing Graphical User Interfaces with Entity-life Modeling

This paper explores using entity-life modeling (ELM) as an effective method for designing concurrent graphical user interfaces in a message processing environment. In ELM, multiple threads of control in the software are modeled on threads of events in the problem environment, and software objects are modeled on objects in the problem. The goal is to identify a minimum number of threads in the pr...


Thread Migration with Active Threads

Acknowledgments: I want to take the time to thank all the people who helped to establish this thesis. The first one to mention is Jürgen Quittek. Thanks for all his tips and support and our discussions that helped me find my way, and thanks for the patience he always had with me. Thanks to the Sather group at the ICSI for their support. A special thanks to Boris Weissman for ...


PtTcl: Using Tcl with Pthreads

Tcl is not thread-safe. If two or more threads attempt to use Tcl at the same time, internal data structures can be corrupted and the program can crash. This is true even if the threads are using separate Tcl interpreters. PtTcl is a modification to the Tcl core that makes Tcl safe to use with POSIX threads. With PtTcl, each thread can create and use its own Tcl interpreters that will not interf...


Scalable Performance for Scala Message-Passing Concurrency

This paper presents an embedded domain-specific language for building massively concurrent systems. In particular, we demonstrate how ultra-lightweight cooperatively-scheduled processes and message-passing concurrency can be provided for the Scala programming language on the Java Virtual Machine (JVM). We make use of a well-known continuation-passing style bytecode transformation in order to ac...


CS 5220: Project 3 - All Pairs Shortest Path

We implement a modified Floyd-Warshall algorithm to solve the all-to-all shortest paths problem in O(n³ log n) time using MPI. Through the use of non-blocking round-robin message passing and other memory optimizations, we show super-linear speedup for a 4000 node graph problem, achieving a 23x speedup for 8 threads. In contrast, our reference OpenMP implementation achieved a 5x speedup for the s...





Publication date: 2006